A neuromorphic visual sensor can recognize moving objects and predict their path
The new smart sensor uses embedded information to detect motion in a single video frame
- Date: April 17, 2023
- Source: Aalto University
- Summary: The new smart sensor uses embedded information to detect motion in a single video frame.
A new bio-inspired sensor can recognize moving objects in a single frame of video and successfully predict where they will move. This smart sensor, described in a Nature Communications paper, will be a valuable tool in a range of fields, including dynamic vision sensing, automatic inspection, industrial process control, robotic guidance, and autonomous driving technology.
Current motion detection systems need many components and complex algorithms that perform frame-by-frame analysis, which makes them inefficient and energy-intensive. Inspired by the human visual system, researchers at Aalto University have developed a new neuromorphic vision technology that integrates sensing, memory, and processing in a single device that can detect motion and predict trajectories.
At the core of their technology is an array of photomemristors, electrical devices that produce electric current in response to light. The current doesn't immediately stop when the light is switched off. Instead, it decays gradually, which means that photomemristors can effectively 'remember' whether they've been exposed to light recently. As a result, a sensor made from an array of photomemristors doesn't just record instantaneous information about a scene, like a camera does, but also includes a dynamic memory of the preceding instants.
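In rough terms, each pixel behaves like a leaky integrator: its output rises when light arrives and decays gradually once the light is gone. The sketch below is a toy model of that behaviour (the decay constant and the binary light sequence are illustrative assumptions, not the authors' device physics); it shows how a pixel's final reading still carries a trace of earlier illumination.

```python
import numpy as np

# Toy model of a single photomemristor pixel as a leaky integrator.
# `tau` is an assumed decay constant, chosen only for illustration.
def photomemristor_response(light_sequence, tau=5.0):
    """Return the pixel current after each step of a binary light sequence."""
    current = 0.0
    history = []
    for light in light_sequence:
        current *= np.exp(-1.0 / tau)   # gradual decay: the 'memory' of past light
        current += light                # instantaneous photoresponse to new light
        history.append(current)
    return history

# A pixel lit only at steps 0-2 still shows a non-zero current at step 9,
# so the final reading reflects whether light arrived recently.
print(photomemristor_response([1, 1, 1, 0, 0, 0, 0, 0, 0, 0]))
```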
'The unique property of our technology is its ability to integrate a series of optical images in one frame,' explains Hongwei Tan, the research fellow who led the study. 'The information of each image is embedded in the following images as hidden information. In other words, the final frame in a video also has information about all the previous frames. That lets us detect motion earlier in the video by analysing only the final frame with a simple artificial neural network. The result is a compact and efficient sensing unit.'
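To make the idea concrete, here is a minimal sketch in the spirit of reservoir computing, where the decaying array acts as the reservoir and a lightweight readout does the classification. The toy data, 4x4 frames, decay constant, and least-squares readout are illustrative assumptions, not the paper's setup: two classes of short videos end with the same final frame, but the integrated frame accumulated by the array differs, so a simple readout trained only on that single frame can tell them apart.

```python
import numpy as np

def integrate_video(frames, tau=5.0):
    """Accumulate frames with exponential decay, as a photomemristor array would."""
    state = np.zeros_like(frames[0], dtype=float)
    for frame in frames:
        state = state * np.exp(-1.0 / tau) + frame
    return state.ravel()

rng = np.random.default_rng(0)
final = rng.integers(0, 2, (4, 4))        # shared last frame (e.g. the letter 'E')
prefix_a = rng.integers(0, 2, (3, 4, 4))  # earlier frames of "word A"
prefix_b = rng.integers(0, 2, (3, 4, 4))  # earlier frames of "word B"

def make_video(prefix, noise=0.1):
    """Build a noisy video that ends with the shared final frame."""
    frames = np.concatenate([prefix, final[None]], axis=0).astype(float)
    return frames + noise * rng.standard_normal(frames.shape)

# Both classes look identical in the last visible frame, but their
# integrated ("all-informative") frames differ.
X = np.array([integrate_video(make_video(p))
              for p in [prefix_a, prefix_b] for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)

# Simple linear readout trained by least squares on the integrated frame only.
features = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(features, y, rcond=None)
pred = (features @ w > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```

Because the temporal information is already folded into the integrated frame, the readout can stay small and cheap, which is the source of the compactness and efficiency described in the quote above.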
To demonstrate the technology, the researchers used videos showing the letters of a word one at a time. Because all the words ended with the letter 'E', the final frame of all the videos looked similar. Conventional vision sensors couldn't tell whether the 'E' on the screen had appeared after the other letters in 'APPLE' or 'GRAPE'. But the photomemristor array could use hidden information in the final frame to infer which letters had preceded it and predict what the word was with nearly 100% accuracy.
In another test, the team showed the sensor videos of a simulated person moving at three different speeds. Not only was the system able to recognize motion by analysing a single frame, but it also correctly predicted the next frames.
Accurately detecting motion and predicting where an object will be are vital for self-driving technology and intelligent transport. Autonomous vehicles need accurate predictions of how cars, bikes, pedestrians, and other objects will move in order to guide their decisions. By adding a machine learning system to the photomemristor array, the researchers showed that their integrated system can predict future motion based on in-sensor processing of an all-informative frame.
'Motion recognition and prediction by our compact in-sensor memory and computing solution provides new opportunities in autonomous robotics and human-machine interactions,' says Professor Sebastiaan van Dijken. 'The in-frame information that we attain in our system using photomemristors avoids redundant data flows, enabling energy-efficient decision-making in real time.'
Story Source:
Materials provided by Aalto University. Note: Content may be edited for style and length.
Journal Reference:
- Hongwei Tan, Sebastiaan van Dijken. Dynamic machine vision with retinomorphic photomemristor-reservoir computing. Nature Communications, 2023; 14(1). DOI: 10.1038/s41467-023-37886-y